

A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models

Neural Information Processing Systems

As we described in Section 3.2.2 of the main paper, we realize mask training via binarization […]. In practice, we control the sparsity in a local way, i.e., all the weight matrices […]. We have introduced the PoE method in Section 3.3. Work was done when Yuanxin Liu was a graduate student of IIE, CAS. We utilize eight datasets from three NLU tasks. Tab. 2 shows the distribution of examples over classes. We use two types of GPU, i.e., Nvidia V100 and TITAN RTX.
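The two ideas in this excerpt — binarizing real-valued mask scores and controlling sparsity locally (per weight matrix, rather than ranking scores across the whole network) — can be sketched as follows. This is a minimal NumPy illustration under those assumptions, not the paper's implementation; `binarize_local` is a hypothetical name.

```python
import numpy as np

def binarize_local(scores: np.ndarray, sparsity: float) -> np.ndarray:
    """Binarize one matrix of real-valued mask scores.

    'Local' sparsity control: each matrix is binarized to the target
    sparsity on its own, rather than ranking scores globally across
    all matrices in the network.
    """
    k = int(round(scores.size * (1.0 - sparsity)))  # entries to keep
    if k == 0:
        return np.zeros_like(scores, dtype=bool)
    threshold = np.partition(scores.ravel(), -k)[-k]  # k-th largest score
    return scores >= threshold

rng = np.random.default_rng(0)
scores = rng.normal(size=(8, 8))       # stand-in for learned mask scores
mask = binarize_local(scores, sparsity=0.75)
print(mask.mean())  # kept fraction: 0.25
```

In actual mask training the scores are learned (e.g., with a straight-through estimator); here they are random only to make the binarization step concrete.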







Beyond 2:4: exploring V:N:M sparsity for efficient transformer inference on GPUs

Zhao, Kang, Yuan, Tao, Bao, Han, Su, Zhenfeng, Gao, Chang, Sun, Zhaofeng, Liang, Zichen, Jing, Liping, Chen, Jianfei

arXiv.org Artificial Intelligence

To date, 2:4 sparsity has stood as the only sparse pattern that can be accelerated using sparse tensor cores on GPUs. In practice, 2:4 sparsity often yields low actual speedups ($\leq 1.3$) and requires fixed sparse ratios, meaning that other ratios, such as 4:8, 8:16, or those exceeding 50% sparsity, do not incur any speedups on GPUs. Recent studies suggest that V:N:M sparsity is promising in addressing these limitations of 2:4 sparsity. However, regarding accuracy, the effects of V:N:M sparsity on broader Transformer models, such as vision Transformers and large language models (LLMs), are largely unexamined. Moreover, some specific issues related to V:N:M sparsity, such as how to select appropriate V and M values, remain unresolved. In this study, we thoroughly investigate the application of V:N:M sparsity in vision models and LLMs across multiple tasks, from pretraining to downstream tasks. We propose three key approaches to enhance the applicability and accuracy of V:N:M-sparse Transformers, including heuristic V and M selection, V:N:M-specific channel permutation, and three-staged LoRA training techniques. Experimental results show that, with our methods, DeiT-small achieves lossless accuracy at 64:2:5 sparsity, while DeiT-base maintains accuracy even at 64:2:8 sparsity. In addition, the fine-tuned LLama2-7B at 64:2:5 sparsity performs comparably or better than training-free 2:4 sparse alternatives on downstream tasks. More importantly, V:N:M-sparse Transformers offer a wider range of speedup-accuracy trade-offs compared to 2:4 sparsity. Overall, our exploration helps establish V:N:M sparsity as a truly effective acceleration solution for Transformers in cost-sensitive inference scenarios.
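For concreteness, the basic N:M pattern that sparse tensor cores accelerate (keep N of every M consecutive weights, e.g., 2 of 4) can be sketched as below. `n_m_mask` is a hypothetical helper; the V:N:M scheme studied here generalizes this by first selecting columns inside each V×M block, a step this sketch omits.

```python
import numpy as np

def n_m_mask(weight: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    """Mask enforcing N:M sparsity: in every group of M consecutive
    weights along a row, keep only the N of largest magnitude."""
    rows, cols = weight.shape
    assert cols % m == 0, "row length must be divisible by M"
    groups = np.abs(weight).reshape(rows, cols // m, m)
    order = np.argsort(groups, axis=-1)          # ascending by magnitude
    mask = np.ones_like(groups, dtype=bool)
    # zero out the (M - N) smallest entries in each group
    np.put_along_axis(mask, order[..., : m - n], False, axis=-1)
    return mask.reshape(rows, cols)

w = np.arange(1.0, 9.0).reshape(1, 8)  # [[1, 2, 3, 4, 5, 6, 7, 8]]
print(n_m_mask(w))  # keeps the 2 largest entries in each group of 4
```

Hardware only accelerates this pattern for fixed N:M ratios (hence the fixed-ratio limitation the abstract mentions); the mask itself can be computed for any N and M.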


A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models

Liu, Yuanxin, Meng, Fandong, Lin, Zheng, Li, Jiangnan, Fu, Peng, Cao, Yanan, Wang, Weiping, Zhou, Jie

arXiv.org Artificial Intelligence

Despite the remarkable success of pre-trained language models (PLMs), they still face two challenges: First, large-scale PLMs are inefficient in terms of memory footprint and computation. Second, on downstream tasks, PLMs tend to rely on dataset bias and struggle to generalize to out-of-distribution (OOD) data. In response to the efficiency problem, recent studies show that dense PLMs can be replaced with sparse subnetworks without hurting performance. Such subnetworks can be found in three scenarios: 1) fine-tuned PLMs, 2) raw PLMs, which are then fine-tuned in isolation, and even 3) PLMs without any parameter fine-tuning. However, these results are only obtained in the in-distribution (ID) setting. In this paper, we extend the study of PLM subnetworks to the OOD setting, investigating whether sparsity and robustness to dataset bias can be achieved simultaneously. To this end, we conduct extensive experiments with the pre-trained BERT model on three natural language understanding (NLU) tasks. Our results demonstrate that sparse and robust subnetworks (SRNets) can consistently be found in BERT, across the aforementioned three scenarios, using different training and compression methods. Furthermore, we explore the upper bound of SRNets using the OOD information and show that there exist sparse and almost unbiased BERT subnetworks. Finally, we present 1) an analytical study that provides insights on how to promote the efficiency of the SRNet search process and 2) a solution to improve subnetworks' performance at high sparsity. The code is available at https://github.com/llyx97/sparse-and-robust-PLM.
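The robustness side of this work relies on debiased training; one standard objective in this line is product-of-experts (PoE), which the supplementary excerpt above also mentions. A minimal NumPy sketch of a PoE loss for a single example follows; the function names and exact formulation are illustrative, not the paper's code.

```python
import numpy as np

def log_softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()                 # subtract max for stability
    return z - np.log(np.exp(z).sum())

def poe_loss(main_logits: np.ndarray, bias_logits: np.ndarray, label: int) -> float:
    """Product-of-Experts debiasing loss for one example.

    The main model is trained through the combined distribution
    softmax(log p_main + log p_bias): examples the bias-only model
    already classifies confidently contribute less to the loss,
    pushing the main model toward harder, less biased examples.
    """
    combined = log_softmax(main_logits) + log_softmax(bias_logits)
    return -log_softmax(combined)[label]
```

In training, only the main model receives gradients; the bias-only model (e.g., one trained on shallow features) is typically fixed. With a uniform bias model the objective reduces to ordinary cross-entropy.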